Q&A with Steve Morlidge of CatchBull (Part 4)


Q: How would you set the target for demand planners: all products at 0.7? All at the practical limit (0.5)?

A: In principle, forecasts can be brought to the practical limit of an RAE of 0.5.
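
For concreteness, here is a minimal sketch of the RAE calculation, assuming the usual Morlidge definition: the forecast's mean absolute error divided by that of the "same as last period" naive forecast. The function name and sample data are illustrative, not from the original discussion.

```python
import numpy as np

def rae(actuals, forecasts):
    """Relative Absolute Error: MAE of the forecast divided by the MAE
    of the 'same as last period' naive forecast.

    actuals, forecasts: 1-D sequences aligned by period. The naive
    forecast for period t is the actual of period t-1, so the first
    period is dropped from both error sums.
    """
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    mae_forecast = np.mean(np.abs(a[1:] - f[1:]))
    mae_naive = np.mean(np.abs(a[1:] - a[:-1]))  # one-period-ahead naive error
    return float(mae_forecast / mae_naive)

# Illustrative series: RAE < 1.0 beats the naive forecast;
# ~0.5 is the practical lower limit discussed here.
actuals   = [100,  98, 105, 103, 110, 107]
forecasts = [ 99, 101, 102, 106, 108, 109]
print(round(rae(actuals, forecasts), 2))  # ≈ 0.62: better than naive, short of 0.5
```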

Whether it is sensible to attempt this for all products, irrespective of the effort and resources involved, is another matter. It would be much more sensible to set aspirations based on considerations such as the business benefit of making an improvement, which may reflect the size of the products, the perceived scope for improvement, or strategic considerations such as the importance of a set of products in a portfolio.

Target setting also has a large psychological dimension which needs to be taken into account. For example, there is a lot of evidence that unrealistic targets unilaterally imposed on people can be demotivating, whereas individuals allowed to set their own targets usually set more stretching ones than those given to them by others.

One approach which avoids some of the pitfalls associated with traditional target setting is to strive for continuous improvement (where the target is, in effect, to beat past performance) in tandem with benchmarking, whereby peer pressure and the transfer of knowledge and best practice drive performance forward.

Q: Does the thinking change when you are forecasting multiple periods forward?

A: In some respects the approach does not change. The limits of what can be achieved are still the same, since they are set by the level of noise in a data series compared to the level and nature of change in the signal.

Of course, the further out one forecasts, the more inaccurate forecasts are likely to be, because there is more opportunity for the signal to change in ways that cannot be anticipated. So while the theoretical lower bound for forecast error doesn't change, the practical difficulty of achieving it grows, and we would expect RAE to deteriorate the further ahead we forecast.

There is also an impact on the upper bound. With 'one period ahead' forecasts there is no reason why performance should be worse than the 'same as last period' naive forecast (RAE = 1.0). This doesn't apply when one is forecasting more than one period ahead; the default stance of 'use the latest actuals' means that with a lag of n periods the benchmark is the actual from n periods earlier. In practice this will usually result in a maximum acceptable RAE of more than 1.0. The exact upper limit needs to be calculated by comparing the one-period-ahead naive forecast error with the n-period-ahead one. The greater the trend in the data, the larger this difference will be.
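
The sketch below illustrates that calculation under the same assumptions as before; the function name and the trending sample series are hypothetical. The ceiling is the ratio of the n-period-ahead naive MAE to the one-period-ahead naive MAE.

```python
import numpy as np

def naive_rae_ceiling(actuals, lag):
    """Ratio of the n-period-ahead naive MAE to the one-period-ahead
    naive MAE: a sketch of the 'maximum acceptable RAE' for a forecast
    made `lag` periods in advance."""
    a = np.asarray(actuals, dtype=float)
    mae_naive_1 = np.mean(np.abs(a[1:] - a[:-1]))       # lag-1 naive error
    mae_naive_n = np.mean(np.abs(a[lag:] - a[:-lag]))   # lag-n naive error
    return float(mae_naive_n / mae_naive_1)

# A trending series: the ceiling grows with the forecast lag.
history = [100, 104, 109, 115, 120, 126, 133, 140]
print(round(naive_rae_ceiling(history, lag=3), 2))  # ≈ 3.01, well above 1.0
```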

Q: Can you comment on the number of observations you would use to estimate RAE?

A: As with any measure, the more data points you have, the more representative (and therefore reliable) the number. On the other hand, RAE can change over time, and if you average over a large amount of historical data these important shifts in performance will be lost.

As a rule of thumb, I would be uncomfortable taking any significant decision (to change forecast methods, etc.) with fewer than 6 data points.
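
One way to reconcile those two concerns is to compute RAE over a trailing window, so recent shifts in performance stay visible. Below is a minimal sketch under the same MAE-ratio assumption as above; the window of 6 periods echoes the rule of thumb, and all names are illustrative.

```python
import numpy as np

def rolling_rae(actuals, forecasts, window=6):
    """RAE over a trailing window of `window` error observations, so
    that shifts in performance are not averaged away. Returns one RAE
    value per period once enough history has accumulated."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    out = []
    for end in range(window, len(a)):
        a_win = a[end - window:end + 1]   # window+1 actuals -> window errors
        f_win = f[end - window:end + 1]
        mae_fc = np.mean(np.abs(a_win[1:] - f_win[1:]))
        mae_nv = np.mean(np.abs(a_win[1:] - a_win[:-1]))  # assumes a non-flat series
        out.append(float(mae_fc / mae_nv))
    return out
```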

Q: Have you found that more people inputting into the forecast adds value or destroys value? For example, inputs from several levels of sales such as account managers, country managers, demand planners, executives, etc.

A: I do not have enough evidence to draw any hard and fast conclusions but I would be surprised if the answer to this question was not ‘it depends’. Some interventions made by some people at some times will be helpful; others made by other people at other times will be unhelpful.

The key to answering this question is to measure the contribution of each intervention using Forecast Value Added (FVA). In addition, interventions are more likely to add value when the data series is very volatile, particularly if it is driven by marketplace activity that can be reliably estimated judgmentally. If a data series is very stable, the opportunity to improve matters tends to be eclipsed by the risk of making things worse.
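
To make that measurement concrete, here is a minimal sketch, assuming FVA is expressed as the change in a chosen accuracy metric (MAE here, though MAPE is also common) between consecutive steps of the forecasting process. The data and names are invented for illustration.

```python
import numpy as np

def mae(actuals, forecasts):
    return float(np.mean(np.abs(np.asarray(actuals, dtype=float)
                                - np.asarray(forecasts, dtype=float))))

def forecast_value_added(actuals, before, after):
    """Value added by an intervention: the reduction in MAE from the
    forecast 'before' the override to the forecast 'after' it.
    Positive = the intervention helped; negative = it destroyed value."""
    return mae(actuals, before) - mae(actuals, after)

# Hypothetical example: a sales override applied to a statistical forecast.
actuals     = [120, 130, 125, 140]
statistical = [118, 128, 131, 136]
overridden  = [125, 135, 130, 150]
print(forecast_value_added(actuals, statistical, overridden))  # -2.75: value destroyed
```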


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
